# efficient inference optimization
Yarn Mistral 7B 128k AWQ
License: Apache-2.0
Yarn Mistral 7B 128k is a language model optimized for long-context processing: it was further pre-trained on long-context data using the YaRN context-extension method and supports a 128k-token context window. This AWQ build is quantized for efficient inference.
Tags: Large Language Model · Transformers · English
Author: TheBloke
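The card describes an AWQ-quantized build of Yarn Mistral 7B 128k distributed through Hugging Face. Below is a minimal sketch of how such a model is typically loaded with the Transformers library; the repository id, the `trust_remote_code` flag, and the `autoawq` dependency are assumptions based on the card, not confirmed details of this listing.

```python
# Minimal sketch: loading an AWQ-quantized long-context model with Hugging Face
# Transformers. Requires the `transformers` and `autoawq` packages; the repo id
# below is an assumption based on the card title and may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Yarn-Mistral-7B-128k-AWQ"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # place the quantized weights on available GPUs
    trust_remote_code=True,   # YaRN context extension may ship custom model code
)

# Generate a short completion; the 128k window matters for much longer prompts.
prompt = "Summarize the benefits of long-context language models:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```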